select, poll, and epoll are all I/O multiplexing mechanisms, but they work quite differently.

1. select

With select, the fd_set must first be copied from user space into kernel space, and the number of descriptors that can be monitored is limited by FD_SETSIZE, typically 1024. The kernel then scans the fd_set (subject to the timeout), copies the active descriptors (readable, writable, or in error) back to user space, and the application processes each ready descriptor in turn.
IOCP and epoll are both used to receive notifications when communicating with devices.
Files on the hard disk are accessed through the file system driver, while sockets are accessed through the network driver (software -> hardware).
An OS driver generally operates by sending a command, waiting for the command to complete, and then retrieving the result.
The layering is similar:

User
-----------------------
Kernel

There can be a number of …
The difference between epoll and select: with select, the maximum number of descriptors a process can monitor is limited by FD_SETSIZE (1024 by default on Linux). epoll has no such limit; it is bounded only by the maximum number of open files, which is generally far larger than that. Broadly speaking, the more memory, the higher the cap; 1 GB of memory allows roughly 100,000 descriptors. select uses a polling mechanism: on every call the kernel has to examine each monitored descriptor in turn …
When epoll_wait is called, the ready sockets are copied to user-space memory and the ready list is emptied. Finally, epoll_wait does one more thing: it rechecks those sockets, and for handles in LT (level-triggered) rather than ET (edge-triggered) mode, any socket that still has unhandled events is put back onto the just-emptied ready list. Therefore a non-ET handle is returned by every epoll_wait call as long as it still has pending events, while an ET-mode handle is not returned again unless new events arrive on it.
Original address: http://www.cnblogs.com/haippy/archive/2012/01/09/2317269.html
------------------------
Epoll Introduction
epoll is a scalable I/O event-notification mechanism in the Linux kernel, first introduced in kernel 2.5.44. It can be used in place of the POSIX select and poll system calls, and it achieves much better performance when the number of monitored file descriptors is very large, because its operations do not require the O(n) scan that select and poll need.
epoll - I/O event notification facility
In Linux network programming, select was used for event notification for a long time. Newer Linux kernels provide a mechanism to replace it: epoll.
The biggest advantage of epoll over select is that its performance does not degrade as the number of watched file descriptors grows.
A connection-pool approach cannot solve the problem either. An index-page request, for example, pulls in dozens of ancillary resource files; if the client's network is slow, those dozens of connections stay blocked for a long time, which users cannot tolerate. With one thread per connection the overhead is very large, and if threads cannot be released quickly the result is disastrous for the server. For public-facing services this effect is especially obvious.
Drawbacks of select:
1) The number of descriptors is capped by FD_SETSIZE, typically 1024.
2) Knowing that events arrived is not enough; you still have to traverse the set to find which descriptors triggered.
3) You cannot dynamically modify the fd_set or close a socket while a call is in progress.
4) A data structure holding a large number of descriptors must be maintained, and copying it between user space and kernel space on every call is expensive.

(select is so old; do we still need it?)
1) It is better supported on old systems.
2) Its timeout can be specified with nanosecond precision (via pselect), whereas poll and epoll only offer millisecond precision.
the returned results would block in the main logic thread.
The only thread in the login server, the main-loop thread, performs the select operation on the listening socket and immediately reads and processes the data on each connected client; it exits when the server receives a SIGABRT or SIGBREAK signal.
Therefore, the main loop of the mangos login server also contains the logic later used by the game server. The key code of the main loop is actually in SocketHandler, that is, the …
mutual exclusion is required when multiple threads operate on one piece of memory simultaneously; otherwise the contents of that memory become unpredictable.
8. Using non-thread-safe function calls.
9. Using non-reentrant function calls in a signal context. These functions read or write data in some memory area; if a signal interrupts a write to that memory, the next call will inevitably go wrong.
10. Passing an address across processes.
11. Some system calls with …
is alive. Netty is based on this model: one thread can monitor many I/O operations, which makes I/O waiting much more efficient. The implementation depends on the operating system; Windows and Linux implement it differently. The early select and poll have concurrency limits, Java NIO's select suffers from the empty-polling (busy-spin) bug, while epoll removes the connection-count limit and lets one thread monitor a great many I/O operations. …
sequence number is still y, but it must also repeat the previously sent ack = x+1.
Step 4: Host A sends a confirmation, setting ACK=1, ack = y+1, with its own sequence number x+1. This releases the connection in the direction from B to A. Host A's TCP then reports to its application process that the entire connection has been released.
10. Details of the state transitions TCP goes through while establishing and releasing a connection:
Client (active open): SYN_SENT -> ESTABLISHED -> (active close) …
Why can nginx replace Apache now? Why does nginx handle concurrency so much better than Apache?
Apache uses the select model from earlier Linux kernel versions, while nginx uses the epoll model available since kernel 2.6.
Back to your specific question: if, while client B is visiting, the server must wait for client A's processing result before accepting client B's request, that is how the select model works. After receiving the request, …
When providing network services it is necessary to write concurrent server programs. The stability of the front-end client application depends partly on the client itself, but more on whether the server is fast enough and stable enough. Common Linux concurrent server models:
Multi-process concurrent server
Multi-threaded concurrent server
select-based multiplexed I/O server
poll-based multiplexed I/O server
Explanation
Install and apply Memcached
Use Nginx + Memcached's small image storage solution
Getting started with Memcached
1. Basic environment preparation
[root@bkjia ~]# yum -y install gcc-c++
2. Service deployment
1. Install libevent
libevent is a library that wraps Linux's epoll, BSD's kqueue, and other event-handling facilities behind a unified interface. Even as the number of connections to the server increases, it can still deliver O(1)-level performance …
2. worker_cpu_affinity 0101 1010;
Starts two worker processes, binding the first to cpu0 and cpu2 and the second to cpu1 and cpu3.
3. ssl_engine device;
On servers with SSL hardware acceleration, names the hardware SSL device so that the device maintains the SSL sessions.
4. timer_resolution interval;
Each time a kernel event call (such as epoll) returns, nginx uses gettimeofday() to update its cached clock; this directive limits the update to at most once per interval, reducing gettimeofday() calls …
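As a sketch, the directives above might appear in nginx.conf like this; the values are illustrative, not recommendations, and ssl_engine only applies when matching hardware is present.

```nginx
worker_processes 2;
worker_cpu_affinity 0101 1010;   # worker 1 -> cpu0+cpu2, worker 2 -> cpu1+cpu3
timer_resolution 100ms;          # refresh the cached clock at most every 100ms
# ssl_engine aesni;              # only with a supported SSL hardware engine
```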
extension policies, with some implementation differences: the generic hash is relatively simple and can use any built-in nginx variable as the hash key, while the consistent hash uses nginx's built-in consistent-hash ring and supports memcache.

3. Comparison and testing

This test compares the balance, consistency, and disaster tolerance of each policy, analyzes the differences, and indicates the applicable scenarios. In order to test nginx load balancing comprehensively and objectively, …
so simple, so use caution.
export GPU_FORCE_64BIT_PTR=0
export GPU_MAX_HEAP_SIZE=100
export GPU_USE_SYNC_OBJECTS=1
export GPU_MAX_ALLOC_PERCENT=100
export GPU_SINGLE_ALLOC_PERCENT=100
./ethdcrminer64 -epool eth.f2pool.com:8080 -ewal 0xD69AF2A796A737A103F12D2F0BCC563A13900E6F -epsw x -eworker 001 -mode 1 -ftime 10
Use it like the example above; just remember to change the wallet address to your own.